loadModel

fun loadModel(modelType: OnnxModelType<*>, vararg executionProviders: ExecutionProvider = arrayOf(ExecutionProvider.CPU())): OnnxInferenceModel

Loads an ONNX model from Android resources. By default, the model is initialized with the ExecutionProvider.CPU execution provider.

Parameters

modelType

model type from ONNXModels

executionProviders

execution providers for model initialization.
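A minimal usage sketch is shown below. The ONNXModelHub construction from Android resources, the concrete model type ONNXModels.CV.MobilenetV1, and the NNAPI execution provider are illustrative assumptions and may differ in your setup; the relevant KotlinDL ONNX classes are assumed to be imported.

// Sketch only: hub construction, model type, and NNAPI provider are assumptions.
val modelHub = ONNXModelHub(context.resources)

// Default call: the model is initialized with ExecutionProvider.CPU.
val model = modelHub.loadModel(ONNXModels.CV.MobilenetV1())

// Explicit execution providers, e.g. NNAPI with a CPU fallback.
val acceleratedModel = modelHub.loadModel(
    ONNXModels.CV.MobilenetV1(),
    ExecutionProvider.NNAPI(),
    ExecutionProvider.CPU()
)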

open override fun <T : InferenceModel, U : InferenceModel> loadModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): T

Equivalent to calling loadModel with the ExecutionProvider.CPU execution provider.

Parameters

modelType

model type from ONNXModels

loadingMode

ignored by this override
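For instance, under the same assumptions as the sketch above, the following two calls yield an equivalently initialized model, since loadingMode is ignored and the CPU execution provider is used; the LoadingMode value is illustrative.

// The loadingMode argument has no effect in this override.
val modelA = modelHub.loadModel(ONNXModels.CV.MobilenetV1(), LoadingMode.SKIP_LOADING_IF_EXISTS)
val modelB = modelHub.loadModel(ONNXModels.CV.MobilenetV1())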